Adversarial Deep Embedded Clustering: On a Better Trade-off Between Feature Randomness and Feature Drift
Authors
Abstract
To overcome the absence of concrete supervisory signals, deep clustering models construct their own labels based on self-supervision and pseudo-supervision. However, applying these techniques can cause Feature Randomness and Feature Drift. In this paper, we formally characterize these two new concepts. On one hand, Feature Randomness takes place when a considerable portion of the pseudo-labels is deemed to be random; in this regard, the trained model can learn non-representative features. On the other hand, Feature Drift takes place when the pseudo-supervised and reconstruction losses are jointly minimized: while penalizing the reconstruction loss aims to preserve all the inherent data information, optimizing the embedded-clustering objective drops the latent between-cluster variances. Due to this compromise, clustering-friendly representations can easily be drifted. In this context, we propose ADEC (Adversarial Deep Embedded Clustering), a novel autoencoder-based clustering model, which relies on a discriminator network to reduce random features while avoiding the drifting effect. Our metrics $\Delta_{FR}$ and $\Delta_{FD}$ allow us to, respectively, assess the level of Feature Randomness and Feature Drift. We empirically demonstrate the suitability of our model for handling these problems using benchmark real datasets. Experimental results validate that our model outperforms state-of-the-art deep clustering methods.
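The trade-off the abstract describes can be made concrete as a composite training objective. The following is a minimal PyTorch sketch, not the authors' released implementation: it assumes a DEC-style Student's t clustering loss and a discriminator that scores reconstructions against real samples, and the networks `encoder`, `decoder`, `discriminator` and the weights `lam`, `gamma` are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical networks -- stand-ins for the paper's architectures.
encoder = nn.Sequential(nn.Linear(784, 500), nn.ReLU(), nn.Linear(500, 10))
decoder = nn.Sequential(nn.Linear(10, 500), nn.ReLU(), nn.Linear(500, 784))
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

n_clusters, latent_dim = 10, 10
centers = nn.Parameter(torch.randn(n_clusters, latent_dim))  # learnable centroids

def soft_assign(z, centers, alpha=1.0):
    """DEC-style Student's t soft assignments q_ij."""
    d2 = torch.cdist(z, centers).pow(2)
    q = (1.0 + d2 / alpha).pow(-(alpha + 1) / 2)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened targets p_ij = q_ij^2 / f_j, renormalized per sample."""
    p = q.pow(2) / q.sum(dim=0)
    return p / p.sum(dim=1, keepdim=True)

def autoencoder_step(x, lam=0.1, gamma=0.1):
    """Joint loss: reconstruction + pseudo-supervised clustering + adversarial term."""
    z = encoder(x)
    x_hat = decoder(z)
    q = soft_assign(z, centers)
    p = target_distribution(q).detach()
    recon = F.mse_loss(x_hat, x)                           # preserves data information
    cluster = F.kl_div(q.log(), p, reduction='batchmean')  # pseudo-supervised objective
    adv = F.binary_cross_entropy_with_logits(              # keep x_hat near the data manifold
        discriminator(x_hat), torch.ones(x.size(0), 1))
    return recon + gamma * cluster + lam * adv

def discriminator_step(x):
    """Train the discriminator to separate real inputs from reconstructions."""
    with torch.no_grad():
        x_hat = decoder(encoder(x))
    real = F.binary_cross_entropy_with_logits(discriminator(x), torch.ones(x.size(0), 1))
    fake = F.binary_cross_entropy_with_logits(discriminator(x_hat), torch.zeros(x.size(0), 1))
    return real + fake
```

In this reading, the adversarial term lets the model relax the pixel-exact reconstruction constraint (the source of feature drift) while the discriminator still anchors the representation to realistic data, countering random pseudo-labels.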
Similar resources
Adversarial Feature Learning
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing generators learn to “linearize semantics” in the latent space of such models. Intuitively, such latent spaces may serve as useful feature representation...
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a ...
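The detection recipe behind feature squeezing can be sketched briefly: compare the model's prediction on the raw input with its prediction on a "squeezed" (coarsened) version; a large gap flags a likely adversarial example. A minimal sketch follows, assuming bit-depth reduction as the squeezer and an L1 distance between softmax outputs; `model`, `threshold`, and `bits` are placeholder names and values, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Squeeze pixel values in [0, 1] down to 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def is_adversarial(model, x, threshold=0.5, bits=4):
    """Flag inputs whose prediction moves too much under squeezing."""
    with torch.no_grad():
        p_raw = F.softmax(model(x), dim=1)
        p_squeezed = F.softmax(model(reduce_bit_depth(x, bits)), dim=1)
    # Per-sample L1 distance between the two prediction vectors.
    score = (p_raw - p_squeezed).abs().sum(dim=1)
    return score > threshold
```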
Enhancing Clustering Performance of Feature Maps Using Randomness
This paper presents an enhancement made to a high dimensional variant of a growing self organizing map called the High Dimensional Growing Self Organizing Map (HDGSOM) that enhances the clustering of the algorithm. The enhancement is based on randomness that expedites the self organizing process by moving the inputs out from local minima producing better clusters within a shorter training time....
Embedded Unsupervised Feature Selection
Sparse learning has been proven to be a powerful technique in supervised feature selection, as it allows feature selection to be embedded into the classification (or regression) problem; a sketch of this embedding appears below. In recent years, increasing attention has been paid to applying sparse learning in unsupervised feature selection. Due to the lack of label information, the vast majority of these algorithms usually generate cluster labels...
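The "embedded" idea is easiest to see in its supervised form: a sparsity-inducing penalty folds feature selection into the model fit itself, so no separate filter step is needed. A minimal scikit-learn sketch with an L1 (lasso) penalty on synthetic data; this illustrates the general sparse-learning mechanism, not the paper's unsupervised algorithm.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Toy data: only 5 of the 50 features actually carry signal.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5, random_state=0)

# The L1 penalty drives irrelevant coefficients to exactly zero,
# so feature selection happens inside the regression fit itself.
model = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"kept {selected.size} of {X.shape[1]} features:", selected)
```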
Unsupervised Learning of Deep Feature Representation for Clustering Egocentric Actions
Popularity of wearable cameras in life logging, law enforcement, assistive vision and other similar applications is leading to explosion in generation of egocentric video content. First person action recognition is an important aspect of automatic analysis of such videos. Annotating such videos is hard, not only because of obvious scalability constraints, but also because of privacy issues ofte...
Journal
Journal title: IEEE Transactions on Knowledge and Data Engineering
Year: 2022
ISSN: 1558-2191, 1041-4347, 2326-3865
DOI: https://doi.org/10.1109/tkde.2020.2997772